
    Principal components analysis on audiograms from a hearing aid clinic

    In this study we describe a Principal Components Analysis (PCA) of 11,462 audiograms recorded at the hearing aid clinic at James Cook University Hospital in Middlesbrough between 1992 and 2001. PCA is a multivariate statistical technique which starts with an n × p matrix in which n subjects are each evaluated on each of p variables (Woods et al., 1986). In our case the n subjects were represented by the 11,462 audiograms, and the p variables were the six air conduction thresholds and five bone conduction thresholds typically obtained in an audiogram. Although the patients were originally tested at 11 thresholds, the principle of PCA is that certain hearing thresholds tend to vary together, and thus can be grouped into a smaller number of underlying variables called principal components (PC). Each PC has a set of coefficients in the range -1 to +1, corresponding to the degree of influence of each of the original thresholds on that PC. The coefficients of the first PC were all negative and approximately equal. This suggests that the main source of variation between the patients was simply the overall degree of hearing loss. The coefficients of the second PC were negative for frequencies at or below 1000 Hz, but positive for higher frequencies, for both air and bone conduction, and thus differentiate patients according to whether they have a predominantly high frequency or low frequency hearing loss. The coefficients of the third PC were negative for air conduction at all frequencies, but positive for bone conduction, showing a contrast between patients with and without an air-bone gap. The fourth component is similar to the second, but corresponds to a sensorineural hearing loss with a sharper dip at 2000–4000 Hz rather than a general high frequency hearing loss. No clear patterns were seen for the fifth or subsequent principal components.
    The percentages of the overall variability in the data explained by the first four principal components were 59.5%, 13.4%, 9.7%, and 5.2% respectively, giving a total of 87.8%. We performed PCA using the MATLAB statistical toolbox.
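The interpretation of the first component can be made concrete with a small sketch. This is not the authors' MATLAB code; the coefficients and audiograms below are invented, modelling only the reported finding that all PC1 coefficients were negative and approximately equal across the 11 thresholds.

```python
import math

# Hypothetical first-principal-component coefficients: the study found them
# all negative and approximately equal, so we model them as -1/sqrt(11)
# (a unit vector) over the 11 thresholds (6 air + 5 bone conduction).
N_THRESH = 11
pc1 = [-1.0 / math.sqrt(N_THRESH)] * N_THRESH

def pc_score(thresholds, coeffs):
    """Project one audiogram (mean-centred thresholds, dB HL) onto a PC."""
    return sum(t * c for t, c in zip(thresholds, coeffs))

# Two invented audiograms, expressed as deviations from the clinic mean:
mild_loss   = [10.0] * N_THRESH   # 10 dB above the mean at every threshold
severe_loss = [40.0] * N_THRESH   # 40 dB above the mean at every threshold

s_mild = pc_score(mild_loss, pc1)
s_severe = pc_score(severe_loss, pc1)

# Because all PC1 coefficients are negative, greater overall hearing loss
# yields a more negative PC1 score: PC1 indexes overall severity.
print(s_mild, s_severe)
```

Since every coefficient has the same sign, the PC1 score is (up to scale and sign) just the average threshold, which is why it can be read as the overall degree of hearing loss.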

    Decision support system for the selection of an ITE or a BTE hearing aid

    The purpose of this research is to mine a large set of heterogeneous audiology data to create a decision support system (DSS) to choose between two hearing aid types (ITE and BTE). The research is based on the analysis of audiology data, using various statistical and data mining techniques, from a large NHS (National Health Service, UK) facility: 180,000 records covering more than 23,000 different patients from a hearing aid clinic. The developed system uses an unconventional method to predict the hearing aid type for a patient and can be used as a second opinion by audiologists for complex cases. After the system was modified to take account of feedback from a professional audiologist, the success rates obtained were in the range of 63 to 66 percent. An automatic system was thus developed to choose between an ITE and a BTE hearing aid, with an explanation facility, for use as a second opinion by audiologists in cases where the choice is not clear cut. This analysis of audiology data and the DSS will provide supplementary information for audiology experts and hearing aid dispensers. The system may also be of interest to manufacturers of hearing technologies as a ready means for their telephone customer-service staff to check data, and the knowledge discovered in audiology records should raise general awareness about the suitability of each hearing aid type.
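The abstract does not specify the mined rules, but the general shape of a DSS with an explanation facility can be sketched. The rules and thresholds below are invented for illustration only; they are not the rules learned from the NHS data.

```python
# A minimal rule-based decision support sketch with an explanation
# facility, in the spirit described above. All rules and cut-offs are
# hypothetical, NOT the ones mined in the study.

def recommend_aid(avg_loss_db, age_years, dexterity_ok):
    """Return (recommendation, explanations) choosing ITE vs BTE."""
    reasons = []
    if avg_loss_db > 70:
        reasons.append(f"average loss {avg_loss_db} dB exceeds 70 dB: "
                       "BTE aids offer more amplification headroom")
        return "BTE", reasons
    if age_years < 10:
        reasons.append("ear canals still growing: BTE with replaceable "
                       "earmoulds is usually preferred for children")
        return "BTE", reasons
    if not dexterity_ok:
        reasons.append("limited manual dexterity: larger BTE controls "
                       "are easier to handle")
        return "BTE", reasons
    reasons.append("moderate loss, adult patient, good dexterity: "
                   "ITE is cosmetically preferable")
    return "ITE", reasons

choice, why = recommend_aid(avg_loss_db=55, age_years=62, dexterity_ok=True)
print(choice)
for r in why:
    print(" -", r)
```

The explanation list is what makes such a system usable as a "second opinion": the audiologist sees not just the prediction but the reasons behind it.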

    Statistical analysis of the tables in Mahadevan’s Concordance of the Indus Valley Script

    The Indus Script originates from the culture known as the Indus Valley Civilization, which flourished from approximately 2600 to 1900 BC. Several thousand objects bearing these signs have been found over a wide area of Northern India and Pakistan. In 1977 Iravatham Mahadevan published a concordance of all of the inscriptions that had been discovered so far. Accompanying the concordance is a set of 9 tables showing the distribution of individual signs by position, archaeological site, object type, field symbol (accompanying image), and direction of writing. Analysis of the frequencies of the signs found so far using Large Numbers of Rare Events (LNRE) models enabled the total vocabulary of the language, including signs not yet found, to be estimated at about 857. All the tables were analysed using Pearson's residuals, and it was found that the signs were not randomly distributed: some showed statistically significant associations with position, object, field symbol or direction of writing. A more detailed analysis of the relation between signs and field symbols was made using correspondence analysis, which showed that certain signs were associated with the unicorn symbol, while others were associated with the gharial and dotted circle symbols.
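Pearson residuals are simple to compute from a contingency table, as a sketch shows. The counts below are invented for illustration and are not taken from Mahadevan's tables.

```python
import math

# Pearson residuals for a 2x2 contingency table, as used above to test
# whether signs are randomly distributed across categories. Counts are
# hypothetical: rows are two signs, columns are two positions.
table = [
    [32, 8],    # sign A: (position-initial, position-final) counts
    [10, 30],   # sign B
]

row_tot = [sum(r) for r in table]
col_tot = [sum(c) for c in zip(*table)]
grand = sum(row_tot)

# residual = (observed - expected) / sqrt(expected), where
# expected = row_total * column_total / grand_total under independence.
# |residual| > 2 is a rough flag for a statistically notable association.
residuals = [
    [(obs - row_tot[i] * col_tot[j] / grand)
     / math.sqrt(row_tot[i] * col_tot[j] / grand)
     for j, obs in enumerate(row)]
    for i, row in enumerate(table)
]
for row in residuals:
    print(["%.2f" % r for r in row])
```

A large positive residual means a sign occurs in that position far more often than chance predicts; a large negative one, far less often.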

    Filtering, modeling, and control of axial force signals in friction stir welding processes

    In friction stir welding processes, good contact between the tool and the workpiece can be accomplished through control of the axial force signal. A method of stochastic modeling is introduced and used in conjunction with a Kalman filter to develop empirical static and dynamic models relating the axial force to input process parameters. The filtering method reduces signal variance by an order of magnitude. The models are experimentally validated and used to design and implement a general tracking controller with disturbance rejection for axial force control. Online control of the axial force is experimentally validated for bead-on-plate welds on a 6061 aluminum alloy for constant and sinusoidal axial force reference signals.
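The variance-reduction claim can be illustrated with a minimal scalar Kalman filter tracking a constant force in noise. The force level and noise variances below are invented, not the identified parameters from the welding experiments.

```python
import random
import statistics

random.seed(0)

# Minimal scalar Kalman filter: constant-force process model with
# hypothetical noise levels (NOT the identified welding-process values).
TRUE_FORCE = 5000.0          # N, hypothetical axial force setpoint
R = 400.0 ** 2               # measurement-noise variance
Q = 1.0                      # small process-noise variance

meas = [TRUE_FORCE + random.gauss(0, 400.0) for _ in range(500)]

x, P = meas[0], R            # initial state estimate and its covariance
filtered = []
for z in meas:
    P += Q                   # predict: constant-force model, noise grows P
    K = P / (P + R)          # Kalman gain: trust in the new measurement
    x += K * (z - x)         # update estimate toward the measurement
    P *= (1 - K)             # update covariance
    filtered.append(x)

raw_var = statistics.pvariance(meas)
filt_var = statistics.pvariance(filtered[50:])   # skip initial transient
print(raw_var, filt_var)     # filtered variance is far smaller
```

With small process noise the filter behaves like a recursive averager, which is why the variance of the filtered signal drops by well over an order of magnitude relative to the raw measurements.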

    The “Diktat für Schlick”: Authorship Research and Computational Stylometry Revisited

    Forthcoming in: Wittgenstein and the Vienna Circle, ed. Friedrich Stadler. Springer. Both the authorship and the dating of the so-called “Diktat für Schlick” (DFS), once attributed to Ludwig Wittgenstein and assigned by Georg Henrik von Wright to the Wittgenstein Nachlass as item 302, are debated topics in Wittgenstein and Vienna Circle research. Schulte (2011) and Manninen (2011) hold that DFS was authored by Friedrich Waismann rather than Wittgenstein. Applying techniques from computational stylometry to the authorship question, the paper concludes that DFS is located stylometrically midway between Waismann’s and Wittgenstein’s writings, but slightly closer to Wittgenstein, so that Wittgenstein’s authorship remains stylometrically not unlikely. The paper concludes by presenting a number of factors that speak in favour of the view that DFS might originally indeed have been dictated by Wittgenstein. For the computational stylometry component, the paper uses Eder, Rybicki and Kestemont’s (2016) “Stylometry with R” package; the degree of similarity and dissimilarity between documents is calculated by Burrows’ Delta measure; and the results are displayed using Hierarchical Cluster Analysis and Principal Components Analysis. For the text corpus part, the paper uses texts authored by Schlick, Waismann and Wittgenstein. For the archival research part, the paper refers to materials from the Schlick Nachlass in the North Holland Archives, the Waismann Nachlass in the Bodleian Libraries, the Rose Rand Nachlass in the Pittsburgh Archives of Scientific Philosophy, the Ludwig Wittgenstein Nachlass in the Trinity College Cambridge Wren Library, and the Cornell copy of the Ludwig Wittgenstein Nachlass. The paper is a follow-up to Oakes and Pichler (2013); for the current paper we have extended the Waismann text corpus with more texts written under the influence of Wittgenstein, among others Logik, Sprache, Philosophie (1976).
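Burrows' Delta itself is a short computation: z-score the relative frequencies of the most frequent words over the comparison corpus, then average the absolute z-score differences between two texts. The toy frequencies below are invented (real analyses use hundreds of frequent words over full corpora), but the measure is the one named above.

```python
import statistics

# Toy Burrows' Delta. Relative frequencies (per 1000 words) of three
# marker words in four corpus texts; all numbers are invented.
corpus = {
    "Wittgenstein_1": [21.0, 10.0, 4.0],
    "Wittgenstein_2": [20.0, 11.0, 5.0],
    "Waismann_1":     [12.0, 16.0, 9.0],
    "Waismann_2":     [13.0, 15.0, 8.0],
}
disputed = [19.0, 11.5, 5.5]   # a DFS-like disputed text (invented)

# z-score each word's frequency over the comparison corpus
cols = list(zip(*corpus.values()))
means = [statistics.mean(c) for c in cols]
sds = [statistics.stdev(c) for c in cols]

def zscores(freqs):
    return [(f - m) / s for f, m, s in zip(freqs, means, sds)]

def delta(a, b):
    """Burrows' Delta: mean absolute difference of word z-scores."""
    return statistics.mean(abs(x - y) for x, y in zip(zscores(a), zscores(b)))

for name, freqs in corpus.items():
    print(name, round(delta(disputed, freqs), 3))
```

A smaller Delta means a closer stylistic match, which is the sense in which the paper reports DFS sitting "slightly closer to Wittgenstein".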

    Relation of modifiable neighborhood attributes to walking

    Background: There is a paucity of research examining associations between walking and environmental attributes that are more modifiable in the short term, such as car parking availability, access to transit, neighborhood traffic, walkways and trails, and sidewalks. Methods: Adults were recruited between April 2004 and September 2006 in the Minneapolis-St Paul metropolitan area and in Montgomery County, Maryland, using similar research designs in the two locations. Self-reported and objective environmental measures were calculated for participants' neighborhoods. Self-reported physical activity was collected through the long form of the International Physical Activity Questionnaire (IPAQ-LF). Generalized estimating equations were used to examine adjusted associations between environmental measures and transport and overall walking. Results: Participants (n = 887) averaged 47 years of age (SD = 13.65) and reported 67 min/week (SD = 121.21) of transport walking and 159 min/week (SD = 187.85) of non-occupational walking. Perceived car parking difficulty was positively related to higher levels of transport walking (OR 1.41, 95%CI: 1.18, 1.69) and overall walking (OR 1.18, 95%CI: 1.02, 1.37). Self-reported ease of walking to a transit stop was negatively associated with transport walking (OR 0.86, 95%CI: 0.76, 0.97), but this relationship was moderated by perceived access to destinations. Walking to transit also was related to non-occupational walking (OR 0.85, 95%CI: 0.73, 0.99). Conclusions: Parking difficulty and perceived ease of access to transit are modifiable neighborhood characteristics associated with self-reported walking.
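For readers less familiar with the odds ratios (OR) and confidence intervals reported above, a small sketch shows how an OR and its 95% CI are computed from a 2x2 table. The counts are invented; the study itself used generalized estimating equations rather than this raw-table calculation.

```python
import math

# Hypothetical 2x2 table: perceived parking difficulty (high/low) by
# walker status. These counts are invented for illustration only.
a, b = 120, 80    # difficult parking: walkers, non-walkers
c, d = 90, 110    # easy parking:      walkers, non-walkers

odds_ratio = (a * d) / (b * c)              # cross-product ratio
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log)
print(round(odds_ratio, 2), (round(lo, 2), round(hi, 2)))
```

An OR above 1 with a CI excluding 1, as here and as for parking difficulty in the study (OR 1.41, CI 1.18-1.69), indicates a positive association.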

    Modeling and analysis of a portable, solid-state neutron detection system for spectroscopic applications

    Ph.D. dissertation, University of Missouri--Columbia, May 2012. Dissertation advisor: Dr. William Miller. This paper discusses a new neutron detection system that allows local volumetric identification of fast neutron thermalization, in the context of forming a solid-state Bonner-like neutron spectrometer. The resulting departure from, and improvement on, the classical Bonner spectrometer is that the entire moderating volume is sampled locally for thermal neutrons. Such volumetric resolution is made possible by layering weakly perturbing, pixelated, high-thermal-efficiency solid-state neutron detectors into a cylindrically symmetric neutron moderator. The overall system exhibits >10% total detection efficiency over the neutron energy range from thermal to 20 MeV, and data can be acquired simultaneously from all detector elements in a single measurement. These measurements can be used to infer information on incident neutron energy spectra and direction, which provides capabilities not available in current systems. The end result is a highly efficient, man-portable device with significantly improved methods for determination of pervading neutron energy spectra and the corresponding dose equivalent.

    Towards automatic generation of relevance judgments for a test collection

    This paper presents a new technique for building a relevance judgment list for information retrieval test collections without any human intervention. It is based on the number of occurrences of documents in runs retrieved from several information retrieval systems, together with a distance-based measure between the documents. The effectiveness of the technique is evaluated by computing the correlation between the ranking of the TREC systems produced using the original relevance judgment list (qrels) built by human assessors and the ranking obtained using the newly generated qrels.
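The occurrence-counting core of such a technique can be sketched in a few lines. The run lists and threshold below are invented, and this sketch omits the paper's distance-based measure between documents.

```python
from collections import Counter

# Pooling by occurrence: documents retrieved by many of the participating
# systems are taken as likely relevant. Runs are hypothetical top-ranked
# document lists for one topic from three systems.
runs = {
    "sysA": ["d1", "d2", "d3", "d7"],
    "sysB": ["d2", "d1", "d5", "d3"],
    "sysC": ["d2", "d3", "d9", "d1"],
}
MIN_SYSTEMS = 2   # arbitrary occurrence threshold for this illustration

# Count, for each document, how many systems retrieved it.
counts = Counter(doc for run in runs.values() for doc in set(run))
qrels = {doc for doc, n in counts.items() if n >= MIN_SYSTEMS}
print(sorted(qrels))
```

Documents returned by many independent systems are plausible relevance candidates; system rankings computed against such automatic qrels can then be correlated with rankings from human-built qrels, as the paper's evaluation does.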

    Using key phrases as new queries in building relevance judgements automatically

    We describe a new technique for building a relevance judgment list (qrels) for TREC test collections with no human intervention. For each TREC topic, a set of new queries is automatically generated from key phrases extracted from the top k documents retrieved by 12 different Terrier weighting models when the initial TREC topic is submitted. We assign a score to each key phrase based on its similarity to the original TREC topic. The key phrases with the highest scores become the new queries for a second search, this time using the Terrier BM25 weighting model. The union of the documents retrieved forms the automatically built set of qrels.
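The phrase-scoring step can be sketched with a simple word-overlap (Jaccard) similarity; the abstract does not name its similarity measure, so this is a stand-in, and the topic and candidate phrases below are invented.

```python
# Scoring candidate key phrases against the original topic, then keeping
# the best scorers as new queries. Jaccard word overlap is used here as
# a hypothetical stand-in for the paper's similarity measure.
def jaccard(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

topic = "hearing aid fitting for elderly patients"
candidates = [          # phrases extracted from top-k documents (invented)
    "hearing aid fitting",
    "elderly patient care",
    "signal processing algorithms",
]
scored = sorted(candidates, key=lambda p: jaccard(p, topic), reverse=True)
print(scored[0])        # the best-scoring phrase becomes a new query
```

The highest-scoring phrases are resubmitted as queries (BM25 in the paper), and the union of the documents they retrieve forms the automatic qrels.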